
    Spatial Programming for Industrial Robots through Task Demonstration

    We present an intuitive system for programming industrial robots by demonstration, using markerless gesture recognition and mobile augmented reality. The approach covers gesture-based task definition and adaptation through human demonstration, as well as task evaluation through augmented reality. A 3D motion tracking system and a handheld device form the basis of the presented spatial programming system. In this publication, we present a prototype for programming an assembly sequence consisting of several pick-and-place tasks. A scene reconstruction provides pose estimation of known objects using the handheld's 2D camera, so the programmer can define the program through natural bare-hand manipulation of these objects, guided by direct visual feedback in the augmented reality application. The program can be adapted by gestures and subsequently transmitted to an arbitrary industrial robot controller via a unified interface. Finally, we discuss an application of the presented spatial programming approach to robot-based welding tasks.
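
    The abstract outlines an architecture rather than an API, but a minimal sketch may help make the data flow concrete: demonstrated pick-and-place tasks are captured as poses and then handed to a controller through a unified interface. All names here (PickPlaceTask, UnifiedRobotInterface) are hypothetical stand-ins, not the paper's actual interface.

    # Hypothetical sketch of a demonstrated assembly sequence; names and
    # structures are illustrative assumptions, not the paper's implementation.
    from dataclasses import dataclass
    from typing import List, Tuple

    Pose = Tuple[float, float, float, float, float, float]  # x, y, z, roll, pitch, yaw

    @dataclass
    class PickPlaceTask:
        object_id: str    # known object recognized by the scene reconstruction
        pick_pose: Pose   # pose estimated via the handheld's 2D camera
        place_pose: Pose  # target pose defined by bare-hand demonstration

    class UnifiedRobotInterface:
        """Stand-in for the unified controller interface mentioned in the abstract."""
        def transmit(self, program: List[PickPlaceTask]) -> None:
            for i, task in enumerate(program):
                print(f"Task {i}: move '{task.object_id}' "
                      f"from {task.pick_pose} to {task.place_pose}")

    program = [
        PickPlaceTask("gear_housing",
                      pick_pose=(0.40, 0.10, 0.02, 0.0, 0.0, 0.0),
                      place_pose=(0.60, -0.20, 0.02, 0.0, 0.0, 1.57)),
    ]
    UnifiedRobotInterface().transmit(program)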

    HabitatDyn Dataset: Dynamic Object Detection to Kinematics Estimation

    The advancement of computer vision and machine learning has made datasets a crucial element for further research and applications. However, the creation and development of robots with advanced recognition capabilities are hindered by the lack of appropriate datasets. Existing image or video processing datasets are unable to accurately depict observations from a moving robot, and they do not contain the kinematics information necessary for robotic tasks. Synthetic data, on the other hand, are cost-effective to create and offer greater flexibility for adapting to various applications. Hence, they are widely utilized in both research and industry. In this paper, we propose the dataset HabitatDyn, which contains synthetic RGB videos, semantic labels, and depth information, as well as kinematics information. HabitatDyn was created from the perspective of a mobile robot with a moving camera and contains 30 scenes featuring six different types of moving objects with varying velocities. To demonstrate the usability of our dataset, two existing segmentation algorithms are used for evaluation, and an approach to estimate the distance between object and camera is implemented on top of these segmentation methods and evaluated on the dataset. With the availability of this dataset, we aspire to foster further advancements in the field of mobile robotics, leading to more capable and intelligent robots that can navigate and interact with their environments more effectively. The code is publicly available at https://github.com/ignc-research/HabitatDyn.
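
    As a hedged illustration of the distance-estimation step the abstract mentions (combining a segmentation mask with the dataset's depth channel), the following is a minimal sketch, not the authors' released code:

    # Assumes a per-frame metric depth map and a binary segmentation mask,
    # as provided by HabitatDyn-style synthetic data; the median over the
    # masked pixels is one simple, illustrative distance estimate.
    import numpy as np

    def object_distance(depth_m: np.ndarray, mask: np.ndarray) -> float:
        """Median depth (meters) over the pixels covered by the mask."""
        pixels = depth_m[mask.astype(bool)]
        if pixels.size == 0:
            raise ValueError("mask selects no pixels")
        return float(np.median(pixels))

    # Toy example: one object occupies the top-left corner of a 4x4 frame.
    depth = np.full((4, 4), 10.0)
    depth[:2, :2] = 2.5
    mask = np.zeros((4, 4), dtype=np.uint8)
    mask[:2, :2] = 1
    print(object_distance(depth, mask))  # -> 2.5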

    Generating Images with Physics-Based Rendering for an Industrial Object Detection Task: Realism versus Domain Randomization

    Limited training data is one of the biggest challenges in the industrial application of deep learning. Generating synthetic training images is a promising solution in computer vision; however, minimizing the domain gap between synthetic and real-world images remains a problem. Therefore, based on a real-world application, we explored the generation of images with physics-based rendering for an industrial object detection task. Setting up the render engine's environment involves many choices and parameters. One fundamental question is whether to apply the concept of domain randomization or to use domain knowledge to pursue photorealism. To answer this question, we compared different strategies for setting up lighting, background, object texture, additional foreground objects, and bounding box computation in a data-centric approach, measuring the average precision obtained from generated images with different levels of realism and variability. In conclusion, we found that domain randomization is a viable strategy for the detection of industrial objects. However, domain knowledge can be used for object-related aspects to improve detection performance. Based on our results, we provide guidelines and an open-source tool for the generation of synthetic images for new industrial applications.
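
    To illustrate what a domain-randomization setup along the compared dimensions (lighting, background, object texture, distractor objects) could look like, here is a short sketch; the parameter names and ranges are assumptions for illustration and are not taken from the paper's open-source tool:

    # Illustrative scene-parameter sampler for domain randomization;
    # ranges and option lists are invented for this sketch.
    import random

    def sample_scene_params(rng: random.Random) -> dict:
        return {
            "light_intensity": rng.uniform(100.0, 1000.0),  # randomized lighting
            "light_azimuth_deg": rng.uniform(0.0, 360.0),
            "background": rng.choice(["random_texture", "hdri", "plain_gray"]),
            "object_texture": rng.choice(["true_material", "random_rgb"]),
            "num_distractors": rng.randint(0, 10),  # additional foreground objects
        }

    rng = random.Random(42)
    for _ in range(3):
        print(sample_scene_params(rng))  # one render-engine configuration per draw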

    Mono Video-Based AI Corridor for Model-Free Detection of Collision-Relevant Obstacles

    The detection of previously unseen, unexpected obstacles on the road is a major challenge for automated driving systems. Unlike the detection of ordinary objects with pre-definable classes, detecting unexpected obstacles on the road cannot be resolved by upscaling the sensor technology alone (e.g., high-resolution video imagers / radar antennas, denser LiDAR scan lines). This is because unexpected obstacles come in a wide variety of types that do not share a common appearance (e.g., lost cargo such as a suitcase or bicycle, tire fragments, a tree stem). Adding more object classes, or lumping "all" of these objects into a common "unexpected obstacle" class, does not scale either. In this contribution, we study the feasibility of using a deep learning video-based lane corridor (called "AI ego-corridor") to ease the challenge by inverting the problem: instead of detecting a previously unseen object, the AI ego-corridor detects that the ego-lane ahead ends. A smart ground-truth definition enables an easy feature-based classification of an abrupt end of the ego-lane. We propose two neural network designs and investigate, among other things, the potential of training with synthetic data. We evaluate our approach on a test vehicle platform and show that it detects numerous previously unseen obstacles at a distance of up to 300 m with a detection rate of 95%.
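
    The inversion of the problem can be paraphrased in a few lines: rather than classifying the obstacle itself, flag frames in which the predicted ego-corridor terminates well short of the look-ahead horizon. The following sketch mocks the network output; the threshold values are illustrative, not the paper's:

    # Hypothetical post-processing of a predicted ego-corridor range;
    # the 300 m horizon echoes the reported detection distance, while
    # the margin is an invented illustrative value.
    def corridor_ends_abruptly(corridor_range_m: float,
                               lookahead_m: float = 300.0,
                               margin_m: float = 20.0) -> bool:
        """True if the drivable ego-lane ends well before the look-ahead horizon."""
        return corridor_range_m < lookahead_m - margin_m

    # Predicted free-corridor ranges (m) for a few frames:
    for range_m in [300.0, 295.0, 140.0]:
        status = "obstacle ahead" if corridor_ends_abruptly(range_m) else "clear"
        print(f"{range_m:6.1f} m -> {status}")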

    Arena-Rosnav 2.0: A Development and Benchmarking Platform for Robot Navigation in Highly Dynamic Environments

    In this paper, we present Arena-Rosnav 2.0, an extension of our previous works Arena-Bench and Arena-Rosnav that adds a variety of additional modules for developing and benchmarking robotic navigation approaches. The platform is fundamentally restructured and provides unified APIs for adding functionality such as planning algorithms, simulators, or evaluation routines. We have included more realistic simulation and pedestrian behavior and provide thorough documentation to lower the entry barrier. We evaluated our system by first conducting a user study in which we asked experienced researchers as well as new practitioners and students to test it. The feedback was largely positive, and many participants have adopted the system for other research endeavors. Finally, we demonstrate the feasibility of our system by integrating two new simulators and a variety of state-of-the-art navigation approaches and benchmarking them against one another. The platform is openly available at https://github.com/Arena-Rosnav.
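
    As a sketch of what a unified, pluggable API for navigation approaches might look like, consider the following registry pattern; this is a hypothetical illustration, not the actual Arena-Rosnav 2.0 interface (see the repository above for the real APIs):

    # Hypothetical planner registry; names and structure are assumptions.
    from typing import Callable, Dict

    PLANNER_REGISTRY: Dict[str, Callable[..., object]] = {}

    def register_planner(name: str):
        """Decorator that registers a navigation approach under a unique name."""
        def wrap(factory: Callable[..., object]):
            PLANNER_REGISTRY[name] = factory
            return factory
        return wrap

    @register_planner("dwa")
    def make_dwa_planner(**params):
        # Stand-in for constructing a real planner object.
        return {"type": "dwa", **params}

    planner = PLANNER_REGISTRY["dwa"](max_vel=0.5)
    print(planner)  # {'type': 'dwa', 'max_vel': 0.5}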